
    Learning and consistency

    In designing learning algorithms it seems quite reasonable to construct them in such a way that all the data the algorithm has already obtained are correctly and completely reflected in the hypothesis the algorithm outputs on these data. However, this approach may fail completely: it may render the learning problem unsolvable, or it may rule out any efficient solution. Therefore we study several types of consistent learning in recursion-theoretic inductive inference. We show that these types are not of universal power, and we give “lower bounds” on this power. We characterize these types by versions of decidability of consistency with respect to suitable “non-standard” spaces of hypotheses. Then we investigate the problem of learning consistently in polynomial time. In particular, we present a natural learning problem and prove that it can be solved in polynomial time if and only if the algorithm is allowed to work inconsistently.
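
    To make the consistency requirement concrete, here is a minimal Python sketch of a consistent learner based on the classical "identification by enumeration" strategy; the hypothesis class and all names are illustrative assumptions, not the paper's construction.

        from typing import Callable, Optional, Sequence

        def consistent_learner(hypotheses: Sequence[Callable[[int], int]],
                               sample: Sequence[int]) -> Optional[int]:
            # Output the index of the first hypothesis that agrees with
            # every data point seen so far, so the conjecture is always
            # consistent with the sample.
            for index, h in enumerate(hypotheses):
                if all(h(x) == sample[x] for x in range(len(sample))):
                    return index
            return None  # no candidate is consistent with the data

        # Hypothetical class: three total functions on the naturals.
        hypotheses = [lambda x: x, lambda x: x * x, lambda x: 2 * x + 1]
        print(consistent_learner(hypotheses, [1, 3, 5, 7]))  # -> 2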

    One-Sided Error Probabilistic Inductive Inference and Reliable Frequency Identification

    For EX- and BC-type identification, one-sided error probabilistic inference and reliable frequency identification on sets of functions are introduced. In particular, we relate the two notions to each other and show that one-sided error probabilistic inference coincides exactly with reliable frequency identification on any set M. Moreover, we show that reliable EX- and BC-frequency inference form a new discrete hierarchy with the breakpoints 1, 1/2, 1/3, ...
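
    As a hedged reading of the hierarchy statement (the notation EX_freq(p) for reliable EX-frequency inference with frequency p is ours, not the paper's): the inferring power is constant between breakpoints and grows strictly exactly at 1, 1/2, 1/3, ...

        % Assumed shape of the discrete hierarchy, in our notation:
        \[
          \mathrm{EX}_{\mathrm{freq}}(p) \;=\; \mathrm{EX}_{\mathrm{freq}}\!\left(\tfrac{1}{n}\right)
          \quad \text{for } \tfrac{1}{n+1} < p \le \tfrac{1}{n},
          \qquad
          \mathrm{EX}_{\mathrm{freq}}\!\left(\tfrac{1}{n}\right) \;\subsetneq\;
          \mathrm{EX}_{\mathrm{freq}}\!\left(\tfrac{1}{n+1}\right).
        \]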

    On the Teachability of Randomized Learners

    The present paper introduces a new model for teaching randomized learners. The new model, though based on the classical teaching dimension model, makes it possible to study the influence of various parameters such as the learner's memory size, its ability to provide feedback or not, and the order in which examples are presented. Moreover, within the new model it is possible to investigate new aspects of teaching, such as teaching from positive data only or teaching with inconsistent teachers. Finally, we provide characterization theorems for teachability from positive data, for both ordinary and inconsistent teachers, with and without feedback.
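
    The following toy Python simulation illustrates two of the parameters the model studies, memory size and example order; the concept class, the uniform-guessing rule, and all names are our assumptions, not the paper's model.

        import random

        # Hypothetical concept class over the domain {0, 1, 2, 3}.
        HYPOTHESES = {
            "even": {0, 2},
            "small": {0, 1},
            "all": {0, 1, 2, 3},
        }

        def consistent(name, memory):
            # A hypothesis is consistent if it labels every remembered
            # example the way the teacher did.
            return all((x in HYPOTHESES[name]) == label for x, label in memory)

        def teach(target, examples, memory_size, rng):
            memory = []
            for x in examples:                  # teacher-chosen order
                memory.append((x, x in HYPOTHESES[target]))
                memory = memory[-memory_size:]  # bounded memory
            candidates = [h for h in HYPOTHESES if consistent(h, memory)]
            return rng.choice(candidates)       # randomized guess

        rng = random.Random(0)
        full = [teach("even", [1, 2], 2, rng) for _ in range(1000)]
        print(full.count("even") / 1000)  # 1.0: both examples are remembered
        tiny = [teach("even", [1, 2], 1, rng) for _ in range(1000)]
        print(tiny.count("even") / 1000)  # about 0.5: only (2, True) survives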

    On Learning of Functions Refutably

    Learning of recursive functions refutably informally means that, for every recursive function, the learning machine has either to learn this function or to refute it, that is, to signal that it is not able to learn it. Three modes of making the notion of refutation precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where even the most stringent of them turns out to be of remarkable topological and algorithmic richness. Furthermore, all these types are closed under union, though to different degrees. These types also differ with respect to their intrinsic complexity: two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present several characterizations of these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation carry over to the refutable setting. From this we derive some hierarchies for refutable learning. Finally, we prove that, in general, stricter refutability constraints cannot be traded for more liberal learning criteria.
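
    As an illustration of the weakest flavor of refutation only (the paper's three precise modes are not reproduced here), this Python sketch conjectures hypotheses on growing data and signals "refute" once every candidate has been contradicted; the enumeration and all names are our assumptions.

        def run_refutably(hypotheses, f, steps):
            # Feed f(0), ..., f(n-1) step by step; at each step conjecture
            # the first consistent hypothesis, or emit "refute" and stop
            # once every candidate disagrees with the data.
            outputs = []
            for n in range(1, steps + 1):
                sample = [f(x) for x in range(n)]
                guess = next((i for i, h in enumerate(hypotheses)
                              if all(h(x) == sample[x] for x in range(n))),
                             "refute")
                outputs.append(guess)
                if guess == "refute":
                    break
            return outputs

        candidates = [lambda x: 0, lambda x: x * x]
        print(run_refutably(candidates, lambda x: x * x, 5))  # [0, 1, 1, 1, 1]
        print(run_refutably(candidates, lambda x: x, 5))      # [0, 1, 'refute']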

    Editors' Introduction to [Algorithmic Learning Theory: 21st International Conference, ALT 2010, Canberra, Australia, October 6-8, 2010. Proceedings]

    Learning theory is an active research area that incorporates ideas, problems, and techniques from a wide range of disciplines, including statistics, artificial intelligence, information theory, pattern recognition, and theoretical computer science. The research reported at the 21st International Conference on Algorithmic Learning Theory (ALT 2010) ranges over areas such as query models, online learning, inductive inference, boosting, kernel methods, complexity and learning, reinforcement learning, unsupervised learning, grammatical inference, and algorithmic forecasting. In this introduction we give an overview of the five invited talks and the regular contributions of ALT 2010.

    Learning Recursive Functions Refutably

    Learning of recursive functions refutably means that, for every recursive function, the learning machine has either to learn this function or to refute it, i.e., to signal that it is not able to learn it. Three modes of making the notion of refutation precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where even the most stringent of them turns out to be of remarkable topological and algorithmic richness. All these types are closed under union, though to different degrees. These types also differ with respect to their intrinsic complexity: two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present characterizations of these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation carry over to the refutable setting. Then we derive hierarchies for refutable learning. Finally, we show that stricter refutability constraints cannot be traded for more liberal learning criteria.

    Learning via Queries with Teams and Anomalies

    Most work in the field of inductive inference regards the learning machine as a passive recipient of data. A prior paper compared this passive approach to an active form of learning in which the machine is allowed to ask questions. In this paper we continue the study of machines that ask questions by comparing such machines to teams of passive machines. This yields, via work of Pitt and Smith, a comparison of active learning with probabilistic learning. Also considered are query inference machines that learn an approximation of what is desired; the approximation differs from the desired result in finitely many anomalous places.
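
    To illustrate the team notion the comparison rests on (roughly, by Pitt and Smith's work, a team of n machines corresponds to probabilistic learning with success probability 1/n), here is a toy Python sketch in which a team succeeds iff at least one member fits the data; the horizon-based success test and all names are our simplifying assumptions.

        def learns(machine_hypotheses, f, horizon=10):
            # A passive machine "learns" f here iff one of its candidate
            # programs agrees with f on the whole finite horizon.
            sample = [f(x) for x in range(horizon)]
            return any(all(h(x) == sample[x] for x in range(horizon))
                       for h in machine_hypotheses)

        team = [
            [lambda x: x],      # member 1 only ever conjectures the identity
            [lambda x: x * x],  # member 2 only ever conjectures the square
        ]
        target = lambda x: x * x
        print(any(learns(m, target) for m in team))  # True: one member suffices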